
Ideas
17 July 2024

Thought experiment 1: The Chinese Room

The American philosopher John Searle’s defence of human intelligence now has to confront today’s sophisticated AI algorithms.

By David Edmonds

You’re in a locked room. Somebody slips a note under the door. Although you can’t read the language, you recognise that it’s Chinese. In the room with you is a manual of instructions for how to manipulate these symbols. By following these instructions, you produce a response, which you pass back under the door. Another note is then passed to you, and again you turn to your manual to generate a reply. The process continues, back and forth. 

Welcome to the Chinese Room – one of the most famous thought experiments. It was devised by John Searle, the renowned philosopher of mind, and, until a sexual harassment scandal, an emeritus professor at the University of California, Berkeley.

Here’s his point. The person or people on the other side of the door may believe that you understand Chinese. But, clearly, thought Searle, they’re mistaken. They’ve been bamboozled. And just as you do not understand Chinese, so a computer, which operates in much the same way, manipulating symbols and mapping inputs to outputs, also cannot understand Chinese. The conclusion, that intelligence cannot be reduced to computation, can be expressed in the jargon of linguistics: mastering syntax (the rules of the language) isn’t sufficient for grasping semantics (the meanings of words). The Chinese Room, Searle argued, showed that the promise of artificial general intelligence – computers having genuine thoughts – was illusory.
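To make Searle’s picture concrete, here is a minimal sketch in Python – a toy “rule book”, with an invented lookup table standing in for Searle’s manual – of a program that holds up its end of an exchange purely by matching symbols to symbols, with nothing in it representing what any symbol means.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" is an invented lookup table: each incoming string of
# symbols is matched to a canned reply. Nothing in the program
# represents what any symbol means.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天天气好。": "是的，很好。",  # "Nice weather today." -> "Yes, very nice."
}

def room_reply(note: str) -> str:
    """Follow the rule book; if no rule matches, return a stock reply."""
    return RULE_BOOK.get(note, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(room_reply("你好吗？"))  # looks like understanding; it is only lookup
```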

But not everyone agrees. The late philosopher Daniel Dennett coined the term “intuition pump” in response to the Chinese Room. An intuition pump is a story, not a formal argument. It’s told to elicit a certain intuition and to stimulate us to think about a problem afresh. Sometimes intuition pumps are helpful. But the Chinese Room, Dennett claimed, was deeply unhelpful. The set-up gives the impression that the person in the room could quickly react to a note by flicking through a manual. But language is more complex than this. It would take billions of lines of code to imitate human linguistic ability, he said. And since we cannot really imagine a system of that complexity, Dennett argued, the intuition the story pumps – that such a system could never understand – is not to be trusted.

The Chinese Room appeared in 1980, the year of the release of the Sinclair ZX80 personal computer. Despite its clunky keyboard, the ZX80 was a consumer hit. It was early days in the tech revolution – email, Google and the iPhone were in the future and the best chess computer programs in the world were no match for the best humans. Four decades later – by which time the ZX80 had become a museum piece – GPT-3 was released. The world’s best chess player could now be beaten by your mobile phone, but GPT-3 and its successors were more remarkable still. GPT-3 was an LLM (large language model), with the headline-grabbing ability to generate text in response to a question or prompt. Computers, it seemed, could indeed simulate understanding.

That same year, a linguistics professor, Emily Bender, with co-author Alexander Koller, published a paper about an octopus. Its purpose was to puncture the pretensions of our computer age, an aim it shared with the Chinese Room. In their thought experiment, you are to imagine that two English speakers, “A” and “B”, living alone on separate islands, are communicating through an underwater cable system, sending text messages back and forth. An octopus taps into these messages. The octopus doesn’t understand English, but it is a statistical whizz, and over time it spots patterns in the communication.

One day, feeling lonely, it decides to cut the wire. “A” continues to text “B”, but unbeknown to “A”, it is now conversing with the octopus, which, using its data set, can predict how “B” would respond. “A” notices nothing.


This is the astonishing trick that LLMs can pull off. But wait – how long can this charade continue? Suppose one day “A” is chased by an angry bear. This has never happened before. “A” sends an urgent text. “Help! How can I make a weapon to defend myself?” The octopus’s reply is bound to be fishy. For it has no data to enable it to predict an adequate response.

LLMs are predictive algorithms: they generate text by estimating, from patterns in vast data sets, which word is most likely to come next. They don’t “understand”. And they are capable of “hallucinating” – when the prediction goes wrong, they confidently spit out a false and possibly ludicrous response.
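As an illustration of the kind of prediction involved, here is a toy sketch in Python – a bigram model that counts which word follows which in a scrap of training text, then generates by sampling a likely next word. It is a drastic simplification, not how any real LLM is built, but it shows the principle: fluent-seeming output from statistics alone, and a blind guess (the octopus’s bear problem, the LLM’s hallucination) when the prompt falls outside its data.

```python
import random
from collections import Counter, defaultdict

# A toy bigram model: for each word, count which word follows it in the
# training text, then "generate" by repeatedly sampling a likely next word.
# There is no meaning anywhere in the model, only statistics of past text.

training_text = (
    "the cat sat on the mat the cat chased the mouse "
    "the mouse ran under the mat"
).split()

following = defaultdict(Counter)
for word, nxt in zip(training_text, training_text[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    counts = following.get(word)
    if not counts:
        # Nothing like this in the data: the model can only guess blindly,
        # the toy equivalent of a hallucination.
        return random.choice(training_text)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

def generate(prompt: str, length: int = 8) -> str:
    words = [prompt]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

if __name__ == "__main__":
    print(generate("the"))   # fluent-looking, drawn from seen patterns
    print(generate("bear"))  # unseen prompt: the continuation is arbitrary
```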

What are we to conclude? Perhaps the brain is in some sense a kind of computer. But our brains are just a part of a more complex system that allows us to navigate through the world. Computation alone, says the philosopher Tim Crane – professor of philosophy at the Central European University and author of The Mechanical Mind – is not enough for understanding: “But give the Chinese Room legs, give it a nose, hands and ears, and then it starts to seem less obvious that this system can’t ‘understand’.”




This article appears in the 17 Jul 2024 issue of the New Statesman, The American Berserk